    Imaging through a projection screen using bi-stable switchable diffusive photon sieves

    © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. We designed and demonstrated a liquid crystal (LC) photon sieve (PS) device that can be integrated into a conventional diffusive projection screen and switched to record images. The fabrication of the device and its assembly using Smectic A (SmA) LC material are also presented. In the PS state, the device comprises diffusive elements that simultaneously allow the scene in front of the device to be imaged on a camera sensor behind it and another image, projected onto the device, to be displayed. The image captured through the PS has acceptable visual quality. Images projected onto the diffusive elements from an external picture source can be observed with a quality comparable to the fully scattering state, with almost no detectable features of the device at the observation distance. The device offers a built-in solution for eye-to-eye video conferencing applications.

    Visual Perception in AR/VR

    In an online OSA Incubator Meeting, top industry and academic researchers explored the importance of accounting for aspects of the human visual system in taking augmented and virtual reality to the next step.

    Demonstrating a multi-primary high dynamic range display system for vision experiments

    We describe the design, construction, calibration, and characterization of a multi-primary high dynamic range (MPHDR) display system for use in vision research. The MPHDR display is the first system to our knowledge to allow for spatially controllable, high dynamic range stimulus generation using multiple primaries. We demonstrate the high luminance, high dynamic range, and wide color gamut output of the MPHDR display. During characterization, the MPHDR display achieved a maximum luminance of 3200 cd/m², a maximum contrast range of 3,240,000:1, and an expanded color gamut tailored to dedicated vision research tasks that spans beyond traditional sRGB displays. We discuss how the MPHDR display could be optimized for psychophysical experiments with photoreceptor-isolating stimuli achieved through the method of silent substitution. We present an example case of a range of metameric pairs of melanopsin-isolating stimuli across different luminance levels, from an available melanopsin contrast of 117% at 75 cd/m² to a melanopsin contrast of 23% at 2000 cd/m².
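    The silent-substitution method mentioned in the abstract reduces to a small linear-algebra problem: find a change in primary drives that leaves the cone excitations unchanged while modulating melanopsin. A minimal sketch follows; the sensitivity matrix and background values are hypothetical placeholders for illustration, not measured data from the MPHDR display.

    ```python
    import numpy as np

    # Hypothetical receptor-by-primary sensitivity matrix (rows: L, M, S
    # cones and melanopsin; columns: four display primaries). Real values
    # come from the measured primary spectra and photoreceptor fundamentals.
    A = np.array([
        [0.70, 0.40, 0.10, 0.05],  # L-cone
        [0.35, 0.60, 0.15, 0.08],  # M-cone
        [0.02, 0.05, 0.80, 0.30],  # S-cone
        [0.10, 0.20, 0.40, 0.70],  # melanopsin
    ])

    background = np.full(4, 0.5)  # background primary drives (assumed)

    # Solve A @ dp = (0, 0, 0, 1): cones silenced, melanopsin modulated.
    dp = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))

    # Scale the direction so the modulated drives stay inside the [0, 1]
    # gamut, with a small safety margin.
    scale = 0.45 / np.max(np.abs(dp))
    modulated = background + scale * dp

    # Contrast per receptor, relative to the background excitation.
    contrast = (A @ modulated - A @ background) / (A @ background)
    print(contrast)  # cone entries ~0, melanopsin entry positive
    ```

    With more primaries than receptor classes (as in a multi-primary display), the square solve becomes a least-squares or constrained optimization over the extra degrees of freedom, which is what allows maximizing the available melanopsin contrast at a given luminance.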

    Perception of perspective in augmented reality head-up displays

    Augmented Reality (AR) is emerging fast, with a wide range of applications including automotive AR Head-Up Displays (AR HUDs). As a result, there is a growing need to understand human depth perception in AR. Here, we discuss two user studies on depth perception, in particular on the perspective cue. The first experiment compares perception of the perspective depth cue (1) in the physical world, (2) on a flat screen, and (3) on an AR HUD. Our AR HUD setup provided a two-dimensional, vertically oriented virtual image projected at a fixed distance. In each setting, participants were asked to estimate the size of a perspective angle. We found that the perception of angle sizes on the AR HUD differs from perception in the physical world, but not from a flat screen. The underestimation of the physical world's angle size compared to the AR HUD and screen setups might explain the egocentric depth underestimation phenomenon in virtual environments. In the second experiment, we compared perception of different graphical representations of angles that are relevant for practical applications. Graphical alterations of angles displayed on a screen resulted in more variation between individuals' angle-size estimations. Furthermore, the majority of participants tended to underestimate the observed angle size in most conditions. Our results suggest that perspective angles on a vertically oriented, fixed-depth AR HUD more accurately mimic the perception of a screen than that of the physical 3D environment. On-screen graphical alteration does not improve the underestimation in the majority of cases.

    Head-up display with dynamic depth-variable viewing effect

    Head-Up Displays (HUDs) can reduce the duration and frequency of drivers looking away from the traffic scene, but in contemporary HUD models, information of differing importance is usually displayed at the same time. Such configurations increase the time a driver spends searching for critical information, and it is essential that this information can quickly attract the driver's attention without disturbing their focus on the road. We introduce an alternative approach that displays critical information at a variable depth in a designated local area of the HUD image. The variations are engineered to create a dynamic pop-up effect for hazard warnings, such as a car exceeding the speed limit or approaching certain road signs. Using an off-the-shelf liquid lens with electrically tuneable focus depth, the image depth of the corresponding area is designed to vary by about half a metre and the image size by 1.4 times for a natural viewing experience. The HUD optics are adjusted to provide an extended eye-box that accommodates the driver's head movement and a uniform image brightness across the eye-box.
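    As a rough sanity check on the quoted figures, a thin-lens sketch shows how a small focal-length change in a tuneable lens can move a virtual image by about half a metre while scaling it by about 1.4×. The object distance and image distances below are hypothetical round numbers chosen to match the abstract, not the paper's actual optical parameters.

    ```python
    # Thin-lens model of the depth-variable viewing effect. Hypothetical
    # numbers: an object (picture source) 0.10 m from the lens, and two
    # virtual image planes 0.5 m apart whose magnifications differ by 1.4x.
    u = 0.10                     # lens-to-picture-source distance (m), assumed
    v_near, v_far = 1.25, 1.75   # virtual image distances (m): 0.5 m apart

    def focal_length(u, v):
        """Focal length placing a virtual image at v for an object at u
        (magnifier configuration, 1/f = 1/u - 1/v)."""
        return u * v / (v - u)

    f_near = focal_length(u, v_near)
    f_far = focal_length(u, v_far)
    m_near, m_far = v_near / u, v_far / u  # linear magnification v/u

    print(round(m_far / m_near, 2))  # 1.4: the image-size change
    print(f_near - f_far)            # focal-length shift (m) the liquid lens supplies
    ```

    The point of the sketch is that only a few millimetres of focal-length tuning are needed, which is well within the range of commercial electrically tuneable liquid lenses.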